Attention mechanisms play an increasingly important role in point cloud analysis, and channel attention is one of the research hotspots. With so much channel information, it is difficult for a neural network to screen out the useful channels. Therefore, an adaptive channel encoding mechanism is proposed in this paper to capture the relationships among channels. It improves the quality of the representations generated by the network by explicitly encoding the interdependence between the channels of its features. Specifically, a channel-wise convolution (Channel-Conv) is proposed to adaptively learn the relationship between coordinates and features in order to encode the channels. Unlike popular attention-weighting schemes, the proposed Channel-Conv makes the convolution operation itself adaptive rather than simply assigning different weights to the channels. Extensive experiments on existing benchmarks verify that our method achieves state-of-the-art performance.
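The abstract leaves the implementation unspecified; the core idea, deriving a per-point, per-channel encoding from both coordinates and features instead of assigning one scalar weight per channel, can be pictured with a minimal sketch (module and parameter names are hypothetical, not taken from the paper):

```python
import torch
import torch.nn as nn

class AdaptiveChannelEncoding(nn.Module):
    """Hypothetical sketch: encode feature channels from point coordinates
    and features, instead of assigning a single scalar weight per channel."""

    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        # shared per-point MLP over the concatenation of xyz and features
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels + 3, hidden, kernel_size=1),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, in_channels, kernel_size=1),
        )

    def forward(self, xyz: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3, N) point coordinates, features: (B, C, N) point features
        encoding = self.encoder(torch.cat([xyz, features], dim=1))  # (B, C, N)
        # per-point, per-channel modulation rather than one weight per channel
        return features * torch.sigmoid(encoding)


if __name__ == "__main__":
    xyz = torch.randn(2, 3, 1024)
    feats = torch.randn(2, 64, 1024)
    out = AdaptiveChannelEncoding(64)(xyz, feats)
    print(out.shape)  # torch.Size([2, 64, 1024])
```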
Transformers play an increasingly important role in various computer vision areas and have also achieved remarkable results in point cloud analysis. Since existing methods mainly focus on point-wise transformers, this paper proposes an adaptive channel encoding transformer. Specifically, a channel convolution is designed to encode the channels. It can encode feature channels by capturing the potential relationship between coordinates and features. Compared with simply assigning an attention weight to each channel, our method aims to encode the channels adaptively. In addition, our network adopts a neighborhood search method with low-level and high-level dual semantic receptive fields to improve performance. Extensive experiments show that our method outperforms state-of-the-art point cloud classification and segmentation methods on three benchmark datasets.
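The dual semantic receptive field can be read as searching neighborhoods both in coordinate space (low-level) and in feature space (high-level); a minimal kNN sketch under that assumption (not the paper's code):

```python
import torch

def knn_indices(points: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbors within a point set.
    points: (B, N, D) points described by D-dimensional vectors.
    Note: each point is its own nearest neighbor (distance 0)."""
    dist = torch.cdist(points, points)                       # (B, N, N)
    return dist.topk(k, dim=-1, largest=False).indices       # (B, N, k)

def dual_receptive_field(xyz: torch.Tensor, feats: torch.Tensor, k: int = 16):
    """Hypothetical reading of the low-/high-level dual semantic receptive
    field: one neighborhood from coordinates, one from learned features."""
    low_level = knn_indices(xyz, k)     # geometric neighbors
    high_level = knn_indices(feats, k)  # semantic neighbors
    return low_level, high_level

if __name__ == "__main__":
    xyz = torch.randn(2, 1024, 3)
    feats = torch.randn(2, 1024, 64)
    low, high = dual_receptive_field(xyz, feats)
    print(low.shape, high.shape)  # (2, 1024, 16) (2, 1024, 16)
```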
Recently, deep neural networks have achieved remarkable results in 3D point cloud classification. However, existing classification methods are mainly implemented on idealized point clouds and suffer significant performance degradation in non-idealized scenarios. To deal with this problem, a feature representation learning method named Dual Neighborhood Deep Fusion Network (DNDFN) is proposed to serve as an improved point cloud encoder for non-idealized point cloud classification tasks. DNDFN utilizes a trained neighborhood learning method called TN-Learning to capture the global key neighborhood. The global neighborhood is then fused with the local neighborhood to help the network achieve stronger reasoning ability. In addition, an information transfer convolution (IT-Conv) is proposed for DNDFN to learn the edge information between point pairs and benefit the feature transfer process. The information transfer in IT-Conv is similar to the propagation of information in a graph, which brings DNDFN closer to a human-like reasoning pattern. Extensive experiments on existing benchmarks, especially non-idealized datasets, verify the effectiveness of DNDFN, and DNDFN achieves state-of-the-art results.
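IT-Conv is described only at a high level; an edge-information transfer step in that spirit, where each point aggregates messages computed from its own feature and the offsets to its neighbors as in graph message passing, might look like the hypothetical sketch below:

```python
import torch
import torch.nn as nn

class InfoTransferConv(nn.Module):
    """Hypothetical sketch of an edge-information transfer step: each point
    aggregates messages computed from (its feature, neighbor - its feature),
    analogous to information propagation on a graph. Not the paper's IT-Conv."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * in_channels, out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feats: torch.Tensor, neighbor_idx: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C); neighbor_idx: (B, N, k) long indices into the N points
        B, N, C = feats.shape
        k = neighbor_idx.shape[-1]
        idx = neighbor_idx.reshape(B, N * k, 1).expand(-1, -1, C)
        neighbors = torch.gather(feats, 1, idx).reshape(B, N, k, C)
        center = feats.unsqueeze(2).expand(-1, -1, k, -1)
        edges = torch.cat([center, neighbors - center], dim=-1)  # (B, N, k, 2C)
        messages = self.edge_mlp(edges)                          # (B, N, k, C_out)
        return messages.max(dim=2).values                        # max-pool over neighbors


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)
    idx = torch.randint(0, 1024, (2, 1024, 16))
    print(InfoTransferConv(64, 128)(feats, idx).shape)  # torch.Size([2, 1024, 128])
```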
Despite some successful applications of goal-driven navigation, existing deep reinforcement learning (DRL) based approaches notoriously suffer from poor data efficiency. One of the reasons is that the goal information is decoupled from the perception module and directly introduced as a condition of decision-making, so the goal-irrelevant features of the scene representation play an adversarial role during the learning process. In light of this, we present a novel Goal-guided Transformer-enabled reinforcement learning (GTRL) approach that feeds the physical goal states into the scene encoder, guiding the scene representation to couple with the goal information and enabling efficient autonomous navigation. More specifically, we propose a novel variant of the Vision Transformer as the backbone of the perception system, namely the Goal-guided Transformer (GoT), and pre-train it with expert priors to boost data efficiency. Subsequently, a reinforcement learning algorithm is instantiated for the decision-making system, taking the goal-oriented scene representation from the GoT as input and generating decision commands. As a result, our approach encourages the scene representation to concentrate mainly on goal-relevant features, which substantially enhances the data efficiency of the DRL learning process and leads to superior navigation performance. Both simulation and real-world experimental results demonstrate the superiority of our approach in terms of data efficiency, performance, robustness, and sim-to-real generalization compared with other state-of-the-art baselines. Demonstration videos are available at https://youtu.be/93LGlGvaN0c.
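The coupling step is not specified beyond the abstract; one rough way to picture it is a ViT-style encoder that prepends the goal state as an extra token so self-attention can mix scene patches with the goal. The sketch below is an illustration under that assumption, not the paper's GoT (all names and dimensions are hypothetical):

```python
import torch
import torch.nn as nn

class GoalGuidedEncoder(nn.Module):
    """Minimal sketch (not the paper's GoT): inject the goal state as an
    extra token so self-attention couples scene patches with the goal."""

    def __init__(self, patch_dim: int, goal_dim: int, d_model: int = 128,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, d_model)
        self.goal_proj = nn.Linear(goal_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, patches: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # patches: (B, P, patch_dim) flattened image patches; goal: (B, goal_dim)
        tokens = torch.cat(
            [self.goal_proj(goal).unsqueeze(1), self.patch_proj(patches)], dim=1
        )
        encoded = self.encoder(tokens)
        return encoded[:, 0]  # goal-conditioned scene representation


if __name__ == "__main__":
    patches = torch.randn(4, 49, 192)
    goal = torch.randn(4, 3)
    print(GoalGuidedEncoder(192, 3)(patches, goal).shape)  # torch.Size([4, 128])
```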
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous success on a wide range of downstream tasks, e.g., zero-shot classification, retrieval and image captioning. However, their successes highly rely on the scale and quality of web-crawled data that naturally contain incomplete and noisy information (e.g., wrong or irrelevant content). Existing works either design manual rules to clean data or generate pseudo-targets as auxiliary signals for reducing noise impact, which do not explicitly tackle both the incorrect and incomplete challenges simultaneously. In this paper, to automatically mitigate the impact of noise by solely mining over existing data, we propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion. First, in noise-harmonization scheme, NLIP estimates the noise probability of each pair according to the memorization effect of cross-modal transformers, then adopts noise-adaptive regularization to harmonize the cross-modal alignments with varying degrees. Second, in noise-completion scheme, to enrich the missing object information of text, NLIP injects a concept-conditioned cross-modal decoder to obtain semantic-consistent synthetic captions to complete noisy ones, which uses the retrieved visual concepts (i.e., objects' names) for the corresponding image to guide captioning generation. By collaboratively optimizing noise-harmonization and noise-completion schemes, our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way. Extensive experiments show the significant performance improvements of our NLIP using only 26M data over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot classification datasets, MSCOCO image captioning and zero-shot image-text retrieval tasks.
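NLIP's exact objective is not reproduced here; the flavor of the noise-harmonization scheme, down-weighting image-text pairs with high estimated noise probability in a contrastive loss, can be sketched as follows (an illustration only, with hypothetical names; the per-pair noise probabilities are assumed to come from the memorization-effect estimate described above):

```python
import torch
import torch.nn.functional as F

def noise_adaptive_clip_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                             noise_prob: torch.Tensor, temperature: float = 0.07):
    """Hypothetical sketch of noise-adaptive regularization: a CLIP-style
    InfoNCE loss whose per-pair terms are down-weighted by the estimated
    noise probability of each image-text pair (not NLIP's exact objective)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    per_pair = 0.5 * (F.cross_entropy(logits, targets, reduction="none")
                      + F.cross_entropy(logits.t(), targets, reduction="none"))
    weights = 1.0 - noise_prob                              # cleaner pairs count more
    return (weights * per_pair).sum() / weights.sum().clamp_min(1e-8)


if __name__ == "__main__":
    img, txt = torch.randn(8, 256), torch.randn(8, 256)
    noise = torch.rand(8)  # estimated noise probability per pair, in [0, 1]
    print(noise_adaptive_clip_loss(img, txt, noise).item())
```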
To cope with the growing demand of human detection for labeled data and with privacy concerns, synthetic data has been used as a substitute and has shown promising results in human detection and tracking tasks. We participated in the 7th Workshop on Benchmarking Multi-Target Tracking (BMTT), themed "How Far Can Synthetic Data Take Us?". Our solution, PieTrack, is developed based on synthetic data without using any pre-trained weights. We propose a self-supervised domain adaptation method that can mitigate the domain shift between synthetic data (e.g., MOTSynth) and real data (e.g., MOT17) without involving extra human labels. By leveraging the proposed multi-scale ensemble inference, we achieved a final HOTA score of 58.7 on the MOT17 test set, ranking third in the challenge.
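The ensemble procedure is not detailed in the abstract; a common reading of multi-scale ensemble inference is to run the detector at several input scales, map detections back to the original resolution, and merge them with NMS. The sketch below follows that assumption; the `detector` callable and its return format are hypothetical:

```python
import torch
from torchvision.ops import nms

def multi_scale_inference(detector, image: torch.Tensor,
                          scales=(0.75, 1.0, 1.25), iou_thresh: float = 0.6):
    """Hypothetical multi-scale ensemble inference: `detector(img)` is assumed
    to return (boxes (M, 4) in xyxy pixels, scores (M,)) for a (3, H, W) image."""
    all_boxes, all_scores = [], []
    for s in scales:
        resized = torch.nn.functional.interpolate(
            image.unsqueeze(0), scale_factor=s, mode="bilinear", align_corners=False
        ).squeeze(0)
        boxes, scores = detector(resized)
        all_boxes.append(boxes / s)   # map boxes back to the original resolution
        all_scores.append(scores)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_thresh)  # merge the per-scale detections
    return boxes[keep], scores[keep]
```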
Transformer-based supervised pre-training achieves good performance in person re-identification (ReID). However, due to the domain gap between ImageNet and ReID datasets, it usually requires a larger pre-training dataset (e.g., ImageNet-21K) to boost performance because of the strong data-fitting ability of transformers. To address this challenge, this work mitigates the gap between the pre-training and ReID datasets from the perspectives of data and model structure, respectively. We first investigate self-supervised learning of Vision Transformers (ViT) on unlabeled person images (the LUPerson dataset). To further reduce the domain gap and accelerate pre-training, a Catastrophic Forgetting Score (CFS) is proposed to evaluate the gap between pre-training and fine-tuning data. Based on CFS, a subset is selected by sampling relevant data close to the downstream ReID data and filtering irrelevant data out of the pre-training dataset. For the model structure, a ReID-specific module named IBN-based convolution stem (ICS) is proposed to bridge the domain gap by learning more invariant features. Extensive experiments have been conducted to fine-tune the pre-trained models under supervised learning, unsupervised domain adaptation (UDA), and unsupervised learning (USL) settings. We successfully downscale the LUPerson dataset to 50% of its size with no performance degradation. Finally, we achieve state-of-the-art performance on Market-1501 and MSMT17. For example, our ViT-S/16 achieves 91.3%/89.9%/89.6% on Market-1501 for supervised/UDA/USL ReID. Codes and models will be released at https://github.com/michuanhaohao/TransReID-SSL.
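The abstract does not define CFS itself; as a generic stand-in for score-based data selection, one can rank pre-training samples by feature distance to the downstream ReID data and keep the closest fraction (this illustrates only the selection step, not the paper's CFS):

```python
import torch

def select_pretraining_subset(pretrain_feats: torch.Tensor,
                              downstream_feats: torch.Tensor,
                              keep_ratio: float = 0.5) -> torch.Tensor:
    """Illustrative stand-in for score-based data selection (not the paper's
    CFS): keep the pre-training samples whose features are closest to the
    downstream ReID data, and filter out the rest."""
    # distance of each pre-training sample to its nearest downstream sample
    dists = torch.cdist(pretrain_feats, downstream_feats).min(dim=1).values
    k = int(keep_ratio * pretrain_feats.size(0))
    return dists.topk(k, largest=False).indices  # indices of the kept subset


if __name__ == "__main__":
    pretrain = torch.randn(10_000, 256)   # features of pre-training images
    downstream = torch.randn(1_000, 256)  # features of downstream ReID images
    subset = select_pretraining_subset(pretrain, downstream, keep_ratio=0.5)
    print(subset.shape)  # torch.Size([5000])
```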
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
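For readers new to the task, a few-shot KG completion instance can be pictured as a relation with only K support triples plus query triples to complete; the toy structure below (made-up example data) is only meant to make the setting concrete:

```python
from dataclasses import dataclass
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

@dataclass
class FewShotTask:
    """Illustrative form of a few-shot KG completion task: a long-tail or
    newly added relation with only K known (support) triples, and query
    triples whose tails must be predicted."""
    relation: str
    support: List[Triple]   # the K annotated triples available for this relation
    query: List[Triple]     # triples to complete for this relation

# A toy 3-shot task for a commonsense-style relation (made-up example data).
task = FewShotTask(
    relation="CapableOf",
    support=[("bird", "CapableOf", "fly"),
             ("fish", "CapableOf", "swim"),
             ("dog", "CapableOf", "bark")],
    query=[("human", "CapableOf", "?")],
)
```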
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
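The paper's modules are not specified here; one plausible form of global photometric alignment is to shift a source image's per-channel statistics toward global target-domain statistics, as in the hedged sketch below (an illustration, not necessarily the proposed module):

```python
import torch

def global_photometric_alignment(src_img: torch.Tensor,
                                 tgt_mean: torch.Tensor,
                                 tgt_std: torch.Tensor) -> torch.Tensor:
    """One plausible form of image-level photometric alignment (an
    illustration, not necessarily the paper's module): shift a source
    image's per-channel statistics toward target-domain statistics."""
    # src_img: (3, H, W); tgt_mean / tgt_std: (3,) global target-domain statistics
    src_mean = src_img.mean(dim=(1, 2), keepdim=True)
    src_std = src_img.std(dim=(1, 2), keepdim=True).clamp_min(1e-6)
    aligned = (src_img - src_mean) / src_std
    return aligned * tgt_std.view(3, 1, 1) + tgt_mean.view(3, 1, 1)


if __name__ == "__main__":
    src = torch.rand(3, 512, 1024)                  # a source-domain image
    out = global_photometric_alignment(src, torch.tensor([0.4, 0.4, 0.4]),
                                       torch.tensor([0.2, 0.2, 0.2]))
    print(out.shape)  # torch.Size([3, 512, 1024])
```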
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a family of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
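KoopmanLab's actual API is not reproduced here; the sketch below only illustrates the Koopman-operator idea such solvers build on, namely lifting the PDE state into latent observables, advancing them with a learned linear operator, and decoding, using hypothetical names and plain PyTorch:

```python
import torch
import torch.nn as nn

class TinyKoopmanSurrogate(nn.Module):
    """Minimal illustration of the Koopman idea (NOT KoopmanLab's API):
    lift the PDE state into latent observables, advance them with a learned
    linear operator, and decode back to physical space."""

    def __init__(self, state_dim: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, latent_dim), nn.Tanh())
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)  # linear dynamics
        self.decoder = nn.Linear(latent_dim, state_dim)

    def forward(self, u0: torch.Tensor, steps: int) -> torch.Tensor:
        # u0: (B, state_dim) initial state; returns the predicted future states
        z = self.encoder(u0)
        out = []
        for _ in range(steps):
            z = self.koopman(z)          # one linear step in observable space
            out.append(self.decoder(z))
        return torch.stack(out, dim=1)   # (B, steps, state_dim)


if __name__ == "__main__":
    u0 = torch.randn(8, 128)  # e.g., a discretized 1D field with 128 grid points
    print(TinyKoopmanSurrogate(128)(u0, steps=10).shape)  # torch.Size([8, 10, 128])
```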